chore: Optimize store #3061
Conversation
This PR may contain changes to the database schema of one of the drivers. If you are introducing any changes to the schema, make sure the upgrade from the latest release to this change passes without any errors/issues. Please make sure the label …
You can find the image built from this PR at …
Built from 19af1f1
Force-pushed from 6eeac6f to 2cea2aa
Force-pushed from c6f3249 to d3a33da
Force-pushed from d3a33da to ae5cd5f
LGTM, thanks so much!
Quick question: does doing `await sleepAsync(0.milliseconds)` pollute the pending Futures?
My understanding is that it gives control back to the event loop and the sleep's Future is cleaned up promptly, so there shouldn't be more than one instance of the `sleepAsync(0.milliseconds)` Future in the pending Futures at once. But maybe I'm wrong.
Co-authored-by: gabrielmer <[email protected]>
Yes, I understood it the same way, but after some analysis I got the impression that it doesn't work exactly that way.
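For context, the call in question simply schedules a zero-length sleep and suspends until the event loop resumes it. A minimal illustrative sketch, not code from the PR:

```nim
import chronos

proc yieldToEventLoop() {.async.} =
  # sleepAsync(0.milliseconds) creates a new Future and completes it on a
  # subsequent event-loop turn; the open question above is how promptly each
  # completed Future disappears from the pending set when this is done
  # repeatedly in a loop.
  await sleepAsync(0.milliseconds)
```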
LGTM! Great work!
```nim
pool.conns[i].busy = false
```

```diff
-const SlowQueryThresholdInNanoSeconds = 2_000_000_000
+const SlowQueryThresholdInNanoSeconds = 1_000_000_000
```
Just a hint, you may like it:
`const SlowQueryThresholdInNanoSeconds = 1.seconds`
... and use it where needed like:
`SlowQueryThresholdInNanoSeconds.nanos()`
I feel it's more readable than counting the zeros :-)
WDYT, would it work?
It's much better that way, thanks! I've applied the suggestion in 54e635d
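As a side note on the suggestion above, chronos `Duration` values can also be compared directly, which avoids converting to nanoseconds at all. A minimal sketch, where the timing code and names are illustrative rather than the PR's actual implementation:

```nim
import chronos

const SlowQueryThreshold = 1.seconds

proc timedQuery() {.async.} =
  let start = Moment.now()
  # ... run the database query here ...
  await sleepAsync(10.milliseconds)  # stand-in for the real query
  let elapsed = Moment.now() - start  # a chronos Duration
  if elapsed > SlowQueryThreshold:
    echo "slow query detected, took ", elapsed
```

Whether the threshold is kept in nanoseconds (as in the suggestion) or as a `Duration` is mostly a readability choice; both avoid counting zeros.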
Description

We have three big time-consumption components in `store`.

This PR enhances point 2, because we have been applying an incorrect `async` approach, which was to wait for a certain state by polling in a loop.
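Roughly, that pattern looks like this (a minimal sketch with hypothetical names, not the PR's actual code):

```nim
import chronos

type PgDbConn = ref object
  busy: bool

proc waitUntilFree(conn: PgDbConn) {.async.} =
  # Busy-waiting: every iteration allocates and awaits a brand-new sleep
  # Future, so a large number of Futures churns through the async queue
  # while the connection stays busy.
  while conn.busy:
    await sleepAsync(0.milliseconds)
```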
Doing that has a very big impact on performance because we were adding a massive amount of future objects to the async queue. @arnetheduck - correct me if that statement is not accurate.
Therefore, in this PR we are starting to apply a better `async` approach to properly coordinate tasks, as sketched below.
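As an illustration of that kind of coordination, here is a rough sketch based on chronos's `AsyncEvent`; the pool type and proc names are made up and do not come from `pgasyncpool.nim`:

```nim
import chronos

type PgAsyncPool = ref object
  connReleased: AsyncEvent

proc newPgAsyncPool(): PgAsyncPool =
  PgAsyncPool(connReleased: newAsyncEvent())

proc releaseConn(pool: PgAsyncPool) =
  # Wake up any task that is waiting for a free connection.
  pool.connReleased.fire()

proc waitForFreeConn(pool: PgAsyncPool) {.async.} =
  # Suspend until releaseConn() fires the event; no sleep Futures are
  # created while waiting.
  await pool.connReleased.wait()
  pool.connReleased.clear()
```

Whatever primitive the PR actually uses (`AsyncEvent`, `AsyncLock`, or per-connection Futures), the key difference is that waiters are resumed by an explicit signal instead of repeatedly re-checking a flag.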
Aside from that, we are simplifying `pgasyncpool.nim` and suggesting a better wrapper for the `DbConn` object.

We are also optimizing the following query so that it uses the `messages_lookup` table instead:

```sql
SELECT timestamp FROM messages WHERE messageHash = $1
```
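For illustration only, and assuming `messages_lookup` carries the same `messageHash` and `timestamp` columns (an assumption, not confirmed by the text above), the rewritten lookup would be along these lines, expressed here as a Nim string constant:

```nim
# Hypothetical optimized query: reads from the smaller messages_lookup table
# instead of the wide messages table. Column names are assumed.
const SelectTimestampByHash =
  """SELECT timestamp FROM messages_lookup WHERE messageHash = $1"""
```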